Movement Pruning: Adaptive Sparsity by Fine-Tuning
Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters.
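The contrast between the two criteria can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: magnitude pruning (zeroth-order) scores each weight by its absolute value, while movement pruning (first-order) scores it by how far fine-tuning is pushing it away from zero, approximated here as the negative accumulated product of gradient and weight. The function names and list-based data layout are illustrative choices, not from the paper.

```python
def magnitude_scores(weights):
    # Zeroth-order importance used by magnitude pruning: |w|.
    return [abs(w) for w in weights]

def movement_scores(weight_history, grad_history):
    # First-order importance in the spirit of movement pruning:
    # S_i = -sum_t (dL/dw_i at step t) * (w_i at step t).
    # A weight being pushed away from zero (gradient and weight have
    # opposite signs) accumulates a high score and is kept.
    scores = [0.0] * len(weight_history[0])
    for ws, gs in zip(weight_history, grad_history):
        for i, (w, g) in enumerate(zip(ws, gs)):
            scores[i] -= g * w
    return scores

def topk_mask(scores, keep_frac):
    # Keep the top keep_frac fraction of weights by score; prune the rest.
    k = max(1, int(round(keep_frac * len(scores))))
    thresh = sorted(scores, reverse=True)[k - 1]
    return [s >= thresh for s in scores]
```

Note how the two criteria can disagree: a large pretrained weight that fine-tuning is shrinking toward zero scores high under magnitude pruning but low under movement pruning, which is exactly the adaptivity the abstract refers to.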
Review for NeurIPS paper: Movement Pruning: Adaptive Sparsity by Fine-Tuning
Additional Feedback:
- For the results in Figure 2, what does the x-axis represent for models with different numbers of parameters? For example, if a MiniBERT model has half as many parameters as BERT-base, then comparing "10% remaining weights" seems a bit unfair. What would the figure look like if the x-axis were instead the number of non-zero parameters?
- You evaluate on one span extraction and two paired sentence classification tasks, but no single sentence classification tasks. Why not replace one of the sentence pair tasks with SST-2, for example? I expect the results would be similar, but it would make the experiments a bit more compelling.